
Keyword Search Result

[Keyword] neural networks (287 hits)

Showing 241-260 of 287 hits

  • Experimental Observations of 2- and 3-Neuron Chaotic Neural Networks Using Switched-Capacitor Chaotic Neuron IC Chip

    Yoshihiko HORIO  Ken SUYAMA  

     
    PAPER-Neural Networks
    Vol: E78-A No:4, Page(s): 529-535

    Switched-capacitor chaotic neurons fabricated in a full-custom integrated circuit are used to investigate the behavior of 2- and 3-neuron chaotic neural networks. Various sets of parameters are used to visualize the dynamical responses of the networks, and hysteresis of the network is also demonstrated. Lyapunov exponents are approximated from the measured data to characterize the state of each neuron. The effects of the finite data length and of rounding in the data acquisition system on the computation of the Lyapunov exponents are briefly discussed.
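
    The Lyapunov estimation step mentioned above can be illustrated in code. Below is a minimal Python sketch of one standard way (a Rosenstein-style nearest-neighbor method) to approximate the largest Lyapunov exponent from a measured scalar series; the embedding parameters and the logistic-map test signal are illustrative assumptions, not the paper's procedure or data.

    ```python
    import numpy as np

    def largest_lyapunov(x, m=3, tau=1, horizon=8, min_sep=10):
        """Rosenstein-style estimate: delay-embed the series, pair each
        state with its nearest (temporally separated) neighbor, and fit
        the slope of the average log-divergence of the pairs over time."""
        n = len(x) - (m - 1) * tau
        emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
        usable = n - horizon
        log_div = np.zeros(horizon)
        counts = np.zeros(horizon)
        for i in range(usable):
            d = np.linalg.norm(emb[:usable] - emb[i], axis=1)
            d[max(0, i - min_sep) : i + min_sep + 1] = np.inf  # skip temporal neighbors
            j = int(np.argmin(d))
            for k in range(horizon):
                sep = np.linalg.norm(emb[i + k] - emb[j + k])
                if sep > 0.0:
                    log_div[k] += np.log(sep)
                    counts[k] += 1
        curve = log_div / np.maximum(counts, 1)
        slope, _ = np.polyfit(np.arange(horizon), curve, 1)
        return slope  # exponent in units of 1/sample

    # Sanity check on the logistic map, whose exponent is ln 2 ~ 0.69.
    x = np.empty(2000)
    x[0] = 0.3
    for t in range(len(x) - 1):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    print(largest_lyapunov(x))  # should come out near 0.69
    ```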

  • Equivalence between Some Dynamical Systems for Optimization

    Kiichi URAHAMA  

     
    LETTER-Optimization Techniques
    Vol: E78-A No:2, Page(s): 268-271

    It is shown, by deriving solution methods for an elementary optimization problem, that stochastic relaxation in image analysis, Potts neural networks for combinatorial optimization, and interior point methods for nonlinear programming share a common formulation of their dynamics. This unification of the algorithms suggests the possibility of solving these problems in real time with common analog electronic circuits.

  • Geometric Shape Recognition with Fuzzy Filtered Input to a Backpropagation Neural Network

    Figen ULGEN  Andrew C. FLAVELL  Norio AKAMATSU  

     
    PAPER-Bio-Cybernetics and Neurocomputing
    Vol: E78-D No:2, Page(s): 174-183

    Recognition of hand-drawn shapes is useful in drawing packages and for automated sketch entry on hand-held computers. Although it is possible to store and retrieve drawings through the use of electronic ink, further manipulation of these drawings requires recognition to be performed. In this paper, we propose a new approach to invariant geometric shape recognition which utilizes a fuzzy function to reduce noise and a neural network for classification. Instead of recognizing segments of a drawing and then performing syntactic analysis to match a predefined shape, which generalizes poorly and copes badly with noise, we examine the shape as a whole. The main concept of the recognition method derives from the fact that internal angles are very important in the perception of a shape. Our application aims to recognize and correctively redraw hand-drawn ellipses, circles, rectangles, squares, and triangles. The neural network learns the relationships between the internal angles of a shape and its classification; therefore only a few training samples representing each shape class are sufficient. The results are very successful: the neural network correctly classified shapes that were not included in the training set.
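
    Since the classifier works on internal angles rather than raw strokes, the core feature is easy to sketch. The Python snippet below computes the interior angle at each vertex of a closed polygon; it is an illustrative sketch of the feature only, and the paper's fuzzy noise filter and network are omitted.

    ```python
    import numpy as np

    def internal_angles(points):
        """Interior angle (degrees) at each vertex of a closed polygon
        given as an (n, 2) array of vertices in drawing order."""
        pts = np.asarray(points, dtype=float)
        prev = np.roll(pts, 1, axis=0) - pts   # vector to previous vertex
        nxt = np.roll(pts, -1, axis=0) - pts   # vector to next vertex
        cosang = np.sum(prev * nxt, axis=1) / (
            np.linalg.norm(prev, axis=1) * np.linalg.norm(nxt, axis=1))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    # A square yields four 90-degree angles; such angle vectors would be
    # the network's input features.
    print(internal_angles([(0, 0), (1, 0), (1, 1), (0, 1)]))  # ~[90 90 90 90]
    ```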

  • Design and Implementations of a Learning T-Model Neural Network

    Zheng TANG  Okihiko ISHIZUKA  

     
    LETTER-Neural Networks
    Vol: E78-A No:2, Page(s): 259-263

    In this letter, we present an experimental CMOS neural circuit as a step toward understanding how particular computations can be performed by a T-Model neural network. The architecture and a digital hardware implementation of the learning T-Model network are presented. Our experimental results show that the T-Model supports large-scale collective network computation and powerful learning.

  • Power Law Slowdown of the Neural Learning

    Hideyuki CÂTEAU  Tatsuhiro NAKAJIMA  Hiroshi NUNOKAWA  Nobuko FUCHIKAMI  

     
    LETTER-Neural Networks
    Vol: E77-A No:12, Page(s): 2109-2111

    We numerically show that the learning time t of the back-propagation model with the encoder topology obeys a power law t ∝ M^D (D: a constant with 1 < D).
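
    To make the stated scaling concrete: if the learning time follows t ∝ M^D, the exponent D is recoverable as the slope of a straight-line fit in log-log coordinates. A tiny Python illustration with made-up data follows; the sizes M and the exponent 1.5 are fabricated for the demo (the abstract's own definition of M is cut off in this listing), not results from the letter.

    ```python
    import numpy as np

    # Hypothetical measurements of learning time t versus a size parameter M.
    # If t = c * M**D, then log t = log c + D * log M, so D is the slope
    # of a log-log fit.
    M = np.array([4.0, 8.0, 16.0, 32.0, 64.0])
    t = 2.0 * M ** 1.5          # synthetic data with D = 1.5
    D, log_c = np.polyfit(np.log(M), np.log(t), 1)
    print(D)                    # ~1.5
    ```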

  • Neural Networks for Digital Sequential Circuits

    Hiroshi NINOMIYA  Hideki ASAI  

     
    LETTER-Neural Networks
    Vol: E77-A No:12, Page(s): 2112-2115

    In this letter an SR-latch circuit using Hopfield neural networks is introduced. An energy function suited for a neural SR-latch circuit is defined for which the global convergence is guaranteed. We also demonstrate how to compose master-slave (M/S) SR- and JK-flip flops of novel SR-latch circuits, and further an asynchronous binary counter of M/S JK-flip flops. Computer simulations are included to illustrate how each presented circuit operates.
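
    As background for how a Hopfield network can behave as a latch, here is a generic Python sketch (not the letter's circuit or weights): with a symmetric, zero-diagonal weight matrix, each asynchronous update can only lower or preserve the energy, and a two-neuron mutual-inhibition network settles into one of two stable states, which is exactly the bistability a latch needs.

    ```python
    import numpy as np

    def energy(W, b, x):
        """Hopfield energy E(x) = -x^T W x / 2 - b^T x."""
        return -0.5 * x @ W @ x - b @ x

    def run(W, b, x, steps, rng):
        """Asynchronous binary (0/1) updates; with W symmetric and
        zero-diagonal, every single-neuron update keeps E non-increasing,
        which is the global-convergence property such designs rely on."""
        for _ in range(steps):
            i = rng.integers(len(x))
            x[i] = 1.0 if W[i] @ x + b[i] >= 0.0 else 0.0
        return x

    W = np.array([[0.0, -2.0],
                  [-2.0, 0.0]])      # mutual inhibition -> two stable states
    b = np.array([1.0, 1.0])
    rng = np.random.default_rng(1)
    x = run(W, b, np.array([1.0, 1.0]), 50, rng)
    print(x, energy(W, b, x))        # settles into (1,0) or (0,1) and stays
    ```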

  • Neural Learning of Chaotic System Behavior

    Gustavo DECO  Bernd SCHÜRMANN  

     
    PAPER-Neural Network and Its Applications
    Vol: E77-A No:11, Page(s): 1840-1845

    We introduce recurrent networks that are able to learn chaotic maps, and investigate whether the neural models also capture the dynamical invariants (correlation dimension, largest Lyapunov exponent) of chaotic time series. We show that the dynamical invariants can already be learned by feedforward neural networks, but that recurrent learning improves the dynamical modeling of the time series. We discover a novel type of overtraining which corresponds to the forgetting of the largest Lyapunov exponent during learning, and call this phenomenon dynamical overtraining. Furthermore, we introduce a penalty term that involves a dynamical invariant of the network and avoids dynamical overtraining. As examples we use the Hénon map, the logistic map, and a real-world chaotic series corresponding to the concentration of one of the chemicals, as a function of time, in experiments on the Belousov–Zhabotinskii reaction in a well-stirred flow reactor.
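
    One way to see what "forgetting the largest Lyapunov exponent" means: for a 1-D map the exponent is the orbit average of log|f'(x)|, so the value for the data-generating map can be compared with the value for a fitted model. Below is a small Python sketch of that invariant for the logistic map; it is illustrative only, and the paper's networks and penalty term are not reproduced.

    ```python
    import numpy as np

    def lyapunov_1d(f, df, x0, n=10000, burn=100):
        """Largest Lyapunov exponent of a 1-D map as the long-run average
        of log|f'(x)| along an orbit. Comparing this value for the true
        map and for a trained model is one way to detect the 'dynamical
        overtraining' the abstract describes."""
        x = x0
        for _ in range(burn):      # discard the transient
            x = f(x)
        acc = 0.0
        for _ in range(n):
            acc += np.log(abs(df(x)))
            x = f(x)
        return acc / n

    f = lambda x: 4.0 * x * (1.0 - x)   # logistic map, r = 4
    df = lambda x: 4.0 - 8.0 * x
    print(lyapunov_1d(f, df, 0.3))      # ~ ln 2 = 0.693
    ```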

  • Optoelectronic Mesoscopic Neural Devices

    Hideaki MATSUEDA  

     
    PAPER-Neural Network and Its Applications
    Vol: E77-A No:11, Page(s): 1851-1854

    A novel optoelectronic mesoscopic neural device is proposed. This device operates in a neural manner, exploiting electron interference and the laser threshold characteristics. The optical output is a 2-dimensional image, and can also be colored if the light-emitting elements are fabricated to form picture elements in three colors, i.e., R, G, and B. Electron waveguiding in the proposed device is analyzed on the basis of the analogy between the Schrödinger equation and Maxwell's wave equation. The nonlinear neural connection is achieved as a result of the superposition of, and the interference among, electron waves transported through different waveguides. The sizes of the critical elements of this device are estimated to be within the reach of present-day technology. The device exceeds conventional VLSI neurochips by many orders of magnitude in the number of neurons per unit area, as well as in the speed of operation.

  • Investigation and Analysis of Hysteresis in Hopfield and T–Model Neural Networks

    Zheng TANG  Okihiko ISHIZUKA  Masakazu SAKAI  

     
    PAPER-Neural Networks
    Vol: E77-A No:11, Page(s): 1970-1976

    We report on experimentally observed hysteresis in Hopfield networks and examine its effect on some important characteristics of those networks. A detailed mathematical description of the hysteresis phenomenon in the Hopfield networks is given. It suggests that the hysteresis results from the fully-connected interconnection of the Hopfield networks, and that the hysteresis tends to make it difficult for the Hopfield networks to reach the global minimum. This paper presents a T-Model network approach that overcomes the hysteresis phenomenon by employing a half-connected interconnection; as a result, no hysteresis phenomenon is found in the T-Model networks. A theoretical analysis of the T-Model networks is also given. The hysteresis phenomenon in the Hopfield and T-Model networks is illustrated through experiments and simulations, and the experiments agree with the theoretical analysis very well.

  • A Pattern Classifier--Modified AFC, and Handwritten Digit Recognition

    Yitong ZHANG  Hideya TAKAHASHI  Kazuo SHIGETA  Eiji SHIMIZU  

     
    PAPER-Artificial Intelligence and Cognitive Science
    Vol: E77-D No:10, Page(s): 1179-1185

    We modified the adaptive fuzzy classification algorithm (AFC), which allows fuzzy clusters to grow to meet the demands of a given task during training. Every fuzzy cluster is defined by a reference vector and a fuzzy cluster radius, and is represented as a hypersphere in pattern space. A pattern class is identified by overlapping plural hyperspherical fuzzy clusters, so it is possible to approximate complex decision boundaries among pattern classes. The modified AFC was applied to handwritten digit recognition, and its performance is compared with that of other neural networks.
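
    The hyperspherical-cluster decision rule can be sketched compactly. In the Python fragment below, each cluster is a reference vector plus a radius, and a pattern is assigned to the class of the cluster covering it most strongly. The linear membership falloff is an assumed stand-in, since the abstract does not specify the modified AFC's membership function or growth rule.

    ```python
    import numpy as np

    def classify(x, centers, radii, labels):
        """Assign x to the class of the fuzzy hyperspherical cluster
        with the highest membership: 1 at the reference vector,
        falling to 0 at the cluster radius (illustrative falloff)."""
        d = np.linalg.norm(centers - x, axis=1)
        membership = np.clip(1.0 - d / radii, 0.0, None)
        return labels[int(np.argmax(membership))]

    centers = np.array([[0.0, 0.0], [3.0, 3.0]])   # reference vectors
    radii = np.array([1.5, 1.0])                   # cluster radii
    labels = np.array([0, 1])                      # class of each cluster
    print(classify(np.array([2.6, 2.8]), centers, radii, labels))  # -> 1
    ```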

  • Parallel Analog Image Coding and Decoding by Using Cellular Neural Networks

    Mamoru TANAKA  Kenneth R. CROUNSE  Tamás ROSKA  

     
    PAPER-Neural Networks
    Vol: E77-A No:8, Page(s): 1387-1395

    This paper describes highly parallel analog image coding and decoding by cellular neural networks (CNNs). The communication system in which the coder (C-) and decoder (D-) CNNs are embedded consists of a differential transmitter with an internal receiver model in the feedback loop. The C-CNN encodes the image through two cascaded techniques: structural compression and halftoning. The D-CNN decodes the received data through a reconstruction process, which includes a dynamic current distribution, so that the original input to the C-CNN can be recognized. The halftoning serves as a dynamic quantization that converts each pixel to a binary value depending on the neighboring values. We approach halftoning via the minimization of the error energy between the original gray image and the reconstructed halftone image, and structural compression from the viewpoints of topological and regularization theories. All dynamics are described by CNN state equations. Both the proposed coding and decoding algorithms use only local image information in a space-invariant manner; therefore errors are distributed evenly and do not introduce the blocking effects found in DCT-based coding methods. In the future, the use of parallel inputs from on-chip photodetectors would allow direct dynamic quantization and compression of image sequences without the use of multiple-bit analog-to-digital converters. To validate our theory, a simulation has been performed using the relaxation method on a 150-frame image sequence. Each input image was 256×256 pixels with 8 bits per pixel. The simulated fixed compression rate, not including the Huffman coding, was about 1/16, with a PSNR of 31 dB to 35 dB.
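
    For readers unfamiliar with halftoning as dynamic quantization, a classical sequential counterpart of the error-energy-minimizing halftoning described above is error diffusion: quantize a pixel to binary and push the quantization error onto unprocessed neighbors. The Python sketch below implements Floyd–Steinberg error diffusion as an illustration of the principle; it is not the parallel C-CNN dynamics.

    ```python
    import numpy as np

    def error_diffusion(img):
        """Floyd–Steinberg error diffusion on a 2-D float image in [0, 1]:
        each pixel is quantized to 0/1 and its quantization error is
        distributed to the not-yet-processed neighbors, so local gray
        levels are preserved in the binary output."""
        out = img.astype(float).copy()
        h, w = out.shape
        for y in range(h):
            for x in range(w):
                old = out[y, x]
                new = 1.0 if old >= 0.5 else 0.0
                out[y, x] = new
                err = old - new
                if x + 1 < w:                out[y, x + 1]     += err * 7 / 16
                if y + 1 < h and x > 0:      out[y + 1, x - 1] += err * 3 / 16
                if y + 1 < h:                out[y + 1, x]     += err * 5 / 16
                if y + 1 < h and x + 1 < w:  out[y + 1, x + 1] += err * 1 / 16
        return out.astype(np.uint8)

    halftone = error_diffusion(np.linspace(0, 1, 64).reshape(8, 8))
    ```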

  • Pipelining Gauss Seidel Method for Analysis of Discrete Time Cellular Neural Networks

    Naohiko SHIMIZU  Gui-Xin CHENG  Munemitsu IKEGAMI  Yoshinori NAKAMURA  Mamoru TANAKA  

     
    PAPER-Neural Networks
    Vol: E77-A No:8, Page(s): 1396-1403

    This paper describes a pipelining universal system of discrete-time cellular neural networks (DTCNNs). A new relaxation-based algorithm, called the Pipelining Gauss-Seidel (PGS) method, is used to solve the CNN state equations in a pipelined fashion. In the systolic system of N processor elements {PEi}, each PEi performs the convolutional computation (CC) of all cells while the preceding PEi-1 performs the CC of all cells, keeping ahead of it by the precedence interval number p. The expected maximum number of PEs for the speedup is given by n/p, where n is the number of cells. As an application, the encoding and decoding of moving images is simulated.
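
    The serial kernel underlying the PGS method is ordinary Gauss-Seidel relaxation, in which each cell update immediately reuses values refreshed earlier in the same sweep; the pipelining assigns successive sweeps to successive PEs, each trailing its predecessor by p cells. Below is a minimal Python sketch of the serial kernel on a toy linear system; the DTCNN state equations and the systolic scheduling are not reproduced.

    ```python
    import numpy as np

    def gauss_seidel_sweeps(A, b, x, sweeps):
        """Gauss-Seidel relaxation for A x = b: each component update
        uses the values already refreshed earlier in the same sweep.
        In the PGS scheme, sweep k runs on PE_k and may start once
        sweep k-1 is p cells ahead, so up to n/p sweeps overlap."""
        n = len(b)
        for _ in range(sweeps):
            for i in range(n):
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])   # diagonally dominant -> converges
    b = np.array([1.0, 2.0, 3.0])
    print(gauss_seidel_sweeps(A, b, np.zeros(3), sweeps=25))
    ```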

  • Recognition of Line Shapes Using Neural Networks

    Masaji KATAGIRI  Masakazu NAGURA  

     
    PAPER
    Vol: E77-D No:7, Page(s): 754-760

    We apply neural networks to implement a line shape recognition/classification system. The purpose of employing neural networks is to eliminate target-specific algorithms from the system and to simplify it; the system needs only to be trained on samples. Shapes are captured by the following operations. Lines to be processed are segmented at inflection points. Each segment is extended from both of its ends by a certain percentage. The shape of each extended segment is captured as an approximate curvature, and the curvature sequence is normalized by size in order to obtain a scale-invariant measure. Feeding this normalized curvature data to a neural network yields position-, rotation-, and scale-invariant line shape recognition. In our experiments, recognition rates of almost 100% are achieved under 5% random modification and 50%-200% scaling. The experimental results show that our method is effective. In addition, since the method captures shape locally, partial lines (caused by overlapping, etc.) can also be recognized.
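
    The curvature-sequence measure can be approximated in a few lines of code. The sketch below is an assumed rendering of the general idea, omitting the inflection-point segmentation and segment-extension steps: it uses turning angles at polyline vertices as a discrete curvature and resamples the sequence to a fixed length for scale invariance.

    ```python
    import numpy as np

    def turning_angles(polyline, k=16):
        """Discrete curvature of an open polyline as the turning angle at
        each interior vertex, resampled to k values so that differently
        sized shapes yield comparable feature vectors."""
        p = np.asarray(polyline, dtype=float)
        v = np.diff(p, axis=0)                       # edge vectors
        ang = np.arctan2(v[:, 1], v[:, 0])           # edge directions
        turn = np.diff(ang)
        turn = (turn + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
        idx = np.linspace(0, len(turn) - 1, k)       # length normalization
        return np.interp(idx, np.arange(len(turn)), turn)

    # Such a vector would be the network's input feature.
    feat = turning_angles([(0, 0), (1, 0), (2, 1), (2, 2), (1, 3)])
    ```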

  • A Memory-Based Recurrent Neural Architecture for Chip Emulating Cortical Visual Processing

    Luigi RAFFO  Silvio P. SABATINI  Giacomo INDIVERI  Giovanni NATERI  Giacomo M. BISIO  

     
    PAPER
    Vol: E77-C No:7, Page(s): 1065-1074

    The paper describes the architecture and the simulated performance of a memory-based chip that emulates human cortical processing in early visual tasks, such as texture segregation. The featural elements present in an image are extracted by a convolution block and subsequently processed by the cortical chip, whose neurons, organized into three layers, gain relational descriptions (intelligent processing) through recurrent inhibitory/excitatory interactions between both inter- and intra-layer parallel pathways. The digital implementation of this architecture directly maps the set of equations determining the status of the cortical network, to achieve an optimal exploitation of VLSI technology in neural computation. Neurons are mapped onto a memory matrix whose elements are updated through a programmable computational unit that implements the synaptic interconnections. Using 0.5-µm CMOS technology, full cortical image processing can be attained on a single chip (20×20 mm² die) at a rate higher than 70 frames/second for images of 256×256 pixels.

  • A Correcting Method for Pitch Extraction Using Neural Networks

    Akio OGIHARA  Kunio FUKUNAGA  

     
    PAPER-Neural Networks
    Vol: E77-A No:6, Page(s): 1015-1022

    Pitch frequency is a basic characteristic of the human voice, and pitch extraction is one of the most important problems in speech recognition. This paper describes a simple but effective technique for obtaining the correct pitch frequency from candidates (pitch candidates) extracted by the short-range autocorrelation function. The correction is performed by a neural network that takes temporal continuity into account by referring to pitch candidates in previous frames. Since the neural network is trained by the back-propagation algorithm on training data, it adapts to any speaker and achieves good correction without sensitive adjustment and tuning. Pitch extraction was performed for three male and three female announcers, and the proposed method improves the percentage of correctly extracted pitch from 58.65% to 89.19%.
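
    The front end being corrected is easy to reproduce: pitch candidates are the strongest peaks of a frame's short-range autocorrelation within a plausible lag range. Below is a small Python sketch; the lag range, candidate count, and test tone are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def pitch_candidates(frame, fs, fmin=60.0, fmax=400.0, n_cand=3):
        """Return pitch candidates (Hz) for one frame as the lags with
        the largest autocorrelation values inside [fs/fmax, fs/fmin]."""
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = int(fs / fmax), int(fs / fmin)
        lags = np.arange(lo, min(hi, len(ac)))
        best = lags[np.argsort(ac[lags])[::-1][:n_cand]]  # strongest lags
        return fs / best                                   # lags -> Hz

    fs = 8000
    t = np.arange(0, 0.03, 1 / fs)
    frame = np.sin(2 * np.pi * 120 * t)   # 120 Hz test tone
    print(pitch_candidates(frame, fs))    # ~120 Hz among the candidates
    ```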

  • Design and Simulation of Neural Network Digital Sequential Circuits

    Hiroshi NINOMIYA  Hideki ASAI  

     
    PAPER-Analog Circuits and Signal Processing
    Vol: E77-A No:6, Page(s): 968-976

    This paper describes a novel technique for realizing high-performance digital sequential circuits using Hopfield neural networks. As examples of applying neural networks to digital circuits, a novel gate circuit, a full-adder circuit, and a latch circuit using neural networks are proposed, all of which have the global convergence property. Here, global convergence means that the energy function is monotonically decreasing and each circuit always operates correctly independently of the initial values. Finally, several digital sequential circuits, such as a shift register and an asynchronous binary counter, are designed.

  • Parallel Implementations of Back Propagation Networks on a Dynamic Data-Driven Multiprocessor

    Ali M. ALHAJ  Hiroaki TERADA  

     
    PAPER-Computer Systems
    Vol: E77-D No:5, Page(s): 579-588

    The data-driven model of computation is well suited to flexible and highly parallel simulation of neural networks. First, the operational semantics of data-driven languages preserve the locality and functionality of neural networks, and naturally describe their inherent parallelism. Second, asynchronous data-driven execution facilitates the implementation of large and scalable multiprocessor systems, which are necessary to obtain considerable simulation speedups. In this paper, we present a dynamic data-driven multiprocessor system and demonstrate its suitability for the parallel simulation of back-propagation neural networks. Two parallel implementations are described and evaluated using an image data compression network. The system is scalable; as a result, the performance improved proportionally with the number of processors.

  • Neural Networks with Interval Weights for Nonlinear Mappings of Interval Vectors

    Kitaek KWON  Hisao ISHIBUCHI  Hideo TANAKA  

     
    PAPER-Mapping
    Vol: E77-D No:4, Page(s): 409-417

    This paper proposes an approach to approximately realizing nonlinear mappings of interval vectors by interval neural networks. Interval neural networks in this paper are characterized by interval weights and interval biases; that is, the weights and biases are given by intervals instead of real numbers. First, an architecture of interval neural networks is proposed for dealing with interval input vectors. Interval neural networks with the proposed architecture map interval input vectors to interval output vectors by interval arithmetic. Some characteristic features of the nonlinear mappings realized by interval neural networks are described. Next, a learning algorithm is derived, in which the training data are pairs of interval input vectors and interval target vectors. Last, using a numerical example, the proposed approach is illustrated and compared with other approaches based on standard back-propagation neural networks with real-number weights.
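
    The core mechanism, propagating an interval vector through a layer with interval weights, follows directly from interval arithmetic: the product of two intervals spans the extremes of the four endpoint products, and a monotone activation maps endpoints to endpoints. Below is a minimal Python sketch of one such layer; the sigmoid activation and the numbers are assumptions for illustration, not the paper's architecture.

    ```python
    import numpy as np

    def interval_dense(x_lo, x_hi, w_lo, w_hi, b_lo, b_hi):
        """One dense layer on interval inputs with interval weights and
        biases. Each product [w][x] spans the min/max of the four
        endpoint products; sums add endpoint-wise; a monotone increasing
        activation (sigmoid here) maps the interval endpoint-wise."""
        p = np.stack([w_lo * x_lo, w_lo * x_hi, w_hi * x_lo, w_hi * x_hi])
        s_lo = p.min(axis=0).sum(axis=1) + b_lo   # lower endpoints
        s_hi = p.max(axis=0).sum(axis=1) + b_hi   # upper endpoints
        sig = lambda s: 1.0 / (1.0 + np.exp(-s))
        return sig(s_lo), sig(s_hi)

    # Two interval inputs, three units; weight intervals straddle zero.
    x_lo, x_hi = np.array([0.1, -0.2]), np.array([0.3, 0.1])
    w_lo = np.array([[-1.0, 0.5], [0.2, -0.3], [0.0, 1.0]])
    w_hi = w_lo + 0.2
    b_lo, b_hi = np.zeros(3), np.full(3, 0.1)
    print(interval_dense(x_lo, x_hi, w_lo, w_hi, b_lo, b_hi))
    ```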

  • A Stochastic Parallel Algorithm for Supervised Learning in Neural Networks

    Abhijit S. PANDYA  Kutalapatata P. VENUGOPAL  

     
    PAPER-Learning
    Vol: E77-D No:4, Page(s): 376-384

    The Alopex algorithm is presented as a universal learning algorithm for neural networks. Alopex is a stochastic parallel process which has previously been applied in the theory of perception, as well as to several nonlinear optimization problems such as the Travelling Salesman Problem. It estimates the weight changes using only a scalar cost function, which is a measure of global performance. In this paper we describe the use of the Alopex algorithm for solving nonlinear learning tasks with multilayer feed-forward networks. Alopex has several advantages, such as the ability to escape from local minima, rapid algorithmic computation based on a scalar cost function, and synchronous updating of the weights. We present the results of computer simulations for several tasks, such as learning of parity, encoder problems, and the MONK's problems. The learning performance as well as the generalization capacity of the Alopex algorithm are compared with those of the backpropagation procedure, and it is shown that Alopex has specific advantages over backpropagation. An important advantage of the Alopex algorithm is its ability to extract information from noisy data. We investigate the efficacy of the algorithm for faster convergence by considering different error functions, and show that an information-theoretic error measure exhibits better convergence characteristics. The algorithm has also been applied to more complex practical problems, such as undersea target recognition from sonar returns and adaptive control of dynamical systems, and the results are discussed.
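
    For concreteness, here is the textbook Alopex update in Python (a sketch of the standard rule, not necessarily the exact variant evaluated in the paper): every weight takes a ±δ step, and a direction whose previous step coincided with a drop in the global scalar cost becomes more likely to be repeated, so all weights can be updated synchronously without gradients.

    ```python
    import numpy as np

    def alopex_step(w, dw_prev, dE_prev, delta, T, rng):
        """One Alopex update for cost minimization. corr < 0 means the
        last move and the last cost change pointed in 'good' opposite
        directions, so repeating the move becomes more likely; the
        temperature T controls how deterministic the choice is."""
        corr = dw_prev * dE_prev                  # per-weight correlation
        p_plus = 1.0 / (1.0 + np.exp(corr / T))   # P(step = +delta)
        dw = np.where(rng.random(w.shape) < p_plus, delta, -delta)
        return w + dw, dw

    rng = np.random.default_rng(0)
    w = rng.normal(size=4)
    dw = np.full(4, 0.01)
    E = lambda v: np.sum(v ** 2)                  # toy scalar cost
    E_prev, dE = E(w), 0.0
    for _ in range(2000):
        w, dw = alopex_step(w, dw, dE, delta=0.01, T=0.001, rng=rng)
        E_now = E(w)
        dE, E_prev = E_now - E_prev, E_now
    print(E(w))   # near zero, within the +/-delta jitter floor
    ```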

  • Iterative Middle Mapping Learning Algorithm for Cellular Neural Networks

    Chen HE  Akio USHIDA  

     
    PAPER-Neural Networks
    Vol: E77-A No:4, Page(s): 706-715

    In this paper, a middle-mapping learning algorithm for cellular associative memories is presented. This algorithm makes full use of the properties of the cellular neural network, so the resulting associative memory has several advantages over a memory designed by the outer product method. It guarantees that each prototype is stored at an equilibrium point. In a practical implementation, the circuit is easy to build because the weight matrix representing the connections between cells is not symmetric. The synchronous updating rule makes its association speed very fast compared to that of the Hopfield associative memory.
